
    Does a given vector-matrix pair correspond to a PH distribution?

    The analysis of practical queueing problems benefits if realistic distributions can be used as parameters. Phase-type (PH) distributions can approximate many distributions arising in practice, but their practical applicability has always been limited when they are described by a non-Markovian vector–matrix pair: in this case it is hard to check whether the pair defines a non-negative matrix-exponential function or not. In this paper we propose a numerical procedure for checking whether the matrix-exponential function defined by a non-Markovian vector–matrix pair can be represented by a Markovian vector–matrix pair of potentially larger size. If so, the matrix-exponential function is non-negative. The proposed procedure is based on O'Cinneide's characterization result, which states that a non-Markovian vector–matrix pair with strictly positive density on (0, ∞) and with a real dominant eigenvalue has a Markovian representation. Our method checks the existence of a potential Markovian representation in a computationally efficient way, utilizing the structural properties of the applied representation transformation procedure.
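    The distinction the abstract draws can be made concrete with a minimal sketch (function names are our own, and this only checks the simple sufficient condition, not the paper's procedure): a pair (α, A) is Markovian when α is a sub-stochastic vector and A a PH sub-generator, and in that case the matrix-exponential density f(t) = α e^{At} (−A)𝟙 is guaranteed non-negative.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def is_markovian(alpha, A, tol=1e-12):
        """Sufficient condition: (alpha, A) is a Markovian (PH) representation
        if alpha is non-negative with sum <= 1 and A has non-positive diagonal,
        non-negative off-diagonal entries, and non-positive row sums."""
        alpha, A = np.asarray(alpha, float), np.asarray(A, float)
        if np.any(alpha < -tol) or alpha.sum() > 1 + tol:
            return False
        off_diag = A - np.diag(np.diag(A))
        if np.any(np.diag(A) > tol) or np.any(off_diag < -tol):
            return False
        return bool(np.all(A.sum(axis=1) <= tol))

    def me_density(alpha, A, t):
        """Evaluate the matrix-exponential density f(t) = alpha expm(A t) (-A) 1."""
        alpha, A = np.asarray(alpha, float), np.asarray(A, float)
        return float(alpha @ expm(A * t) @ (-A) @ np.ones(len(alpha)))

    # Erlang(2) with rate 1 as a Markovian vector-matrix pair
    alpha = [1.0, 0.0]
    A = [[-1.0, 1.0], [0.0, -1.0]]
    ```

    For this pair, f(t) = t e^{−t}, the Erlang-2 density; the hard case the paper addresses is the converse, where `is_markovian` fails but an equivalent larger Markovian pair may still exist.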

    NetemCG – IP packet-loss injection using a continuous-time Gilbert model

    Injection of IP packet loss is a versatile method for emulating real-world network conditions in performance studies. In order to reproduce realistic packet-loss patterns, stochastic fault-models are used. In this report we describe our implementation of a Linux kernel module using a continuous-time Gilbert model for packet-loss injection.
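    The underlying fault-model can be sketched in a few lines (an illustrative user-space simulation, not the NetemCG kernel module itself; parameter names are our own): the channel alternates between a Good and a Bad state with exponentially distributed sojourn times, and packets arriving while the channel is Bad are dropped with a given probability.

    ```python
    import random

    def gilbert_loss(arrival_times, rate_gb, rate_bg, p_loss_bad, seed=0):
        """Decide per-packet loss with a two-state continuous-time Gilbert model.
        rate_gb: Good->Bad switching rate; rate_bg: Bad->Good switching rate;
        packets seen in the Bad state are lost with probability p_loss_bad."""
        rng = random.Random(seed)
        state = 'G'
        next_switch = rng.expovariate(rate_gb)   # first Good->Bad transition
        lost = []
        for at in sorted(arrival_times):
            while next_switch <= at:             # advance the chain to time `at`
                state = 'B' if state == 'G' else 'G'
                rate = rate_bg if state == 'B' else rate_gb
                next_switch += rng.expovariate(rate)
            lost.append(state == 'B' and rng.random() < p_loss_bad)
        return lost
    ```

    Because losses cluster in Bad-state sojourns, the resulting pattern is bursty rather than i.i.d., which is the point of using a Gilbert model over a simple Bernoulli drop probability.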

    A survey on fault-models for QoS studies of service-oriented systems

    This survey paper presents an overview of the fault-models available to the researcher who wants to parameterise system-models in order to study Quality-of-Service (QoS) properties of systems with service-oriented architecture. The concept of a system-model subsumes the whole spectrum between abstract mathematical models and testbeds based on actual implementations. Fault-models, on the other hand, are parameters to system-models. They introduce faults and disturbances into the system-model, thereby allowing the study of QoS under realistic conditions. In addition to a survey of existing fault-models, the paper also provides a discussion of available fault-classification schemes.

    M87* in space, time, and frequency

    Observing the dynamics of compact astrophysical objects provides insights into their inner workings, thereby probing physics under extreme conditions. The immediate vicinity of an active supermassive black hole, with its event horizon, photon ring, accretion disk, and relativistic jets, is a perfect place to study general relativity, magneto-hydrodynamics, and high-energy plasma physics. The recent observations of the black-hole shadow of M87* with Very Long Baseline Interferometry (VLBI) by the Event Horizon Telescope (EHT) open the possibility of investigating its dynamical processes on time scales of days. In this regime, radio-astronomical imaging algorithms are brought to their limits. Compared to regular radio interferometers, VLBI networks typically have fewer antennas and low signal-to-noise ratios (SNRs). If the source is variable during the observational period, one cannot co-add data on the sky brightness distribution from different time frames to increase the SNR. Here, we present an imaging algorithm that copes with the data scarcity and the source's temporal evolution, while simultaneously providing uncertainty quantification on all results. Our algorithm views the imaging task as a Bayesian inference problem of a time-varying brightness, exploits the correlation structure between time frames, and reconstructs an entire 2+1+1-dimensional time-variable and spectrally resolved image at once. The degree of correlation in the spatial and the temporal domains is inferred from the data, and no form of correlation is excluded a priori. We apply this method to the EHT observation of M87* and validate our approach on synthetic data. The time- and frequency-resolved reconstruction of M87* confirms variable structures on the emission ring on a time scale of days. The reconstruction indicates extended and time-variable emission structures outside the ring itself.
    Comment: 43 pages, 15 figures, 6 tables

    Gossip routing, percolation, and restart in wireless multi-hop networks

    Route and service discovery in wireless multi-hop networks applies flooding or gossip routing to disseminate and gather information. Since packets may get lost, retransmissions of lost packets are required. In many protocols the retransmission timeout is fixed in the protocol specification. In this technical report we demonstrate that optimization of the timeout is required in order to ensure proper functioning of flooding schemes. Based on an experimental study, we apply percolation theory and derive analytical models for computing the optimal restart timeout. To the best of our knowledge, this is the first comprehensive study of gossip routing, percolation, and restart in this context.
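    Why the timeout matters can be shown with a small Monte Carlo sketch (a generic restart model with names of our own choosing, not the report's analytical derivation): each attempt is lost with some probability, a lost or too-slow attempt costs the full timeout before a retransmission, and the mean completion time depends strongly on where the timeout is set.

    ```python
    import random

    def mean_completion(loss_prob, timeout, delay_sampler, n=20000, seed=1):
        """Estimate the mean time to a first successful reply under restart.
        Each attempt is lost with loss_prob; a lost attempt (or a reply slower
        than the timeout) costs the full timeout, then the sender restarts."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n):
            t = 0.0
            while True:
                if rng.random() < loss_prob:
                    t += timeout          # packet lost: wait out the timer, restart
                    continue
                d = delay_sampler(rng)
                if d > timeout:
                    t += timeout          # reply too slow: restart anyway
                else:
                    t += d                # success within the timeout
                    break
            total += t
        return total / n
    ```

    With a deterministic 0.5 s delay, 50% loss, and a 1 s timeout, each loss costs a full second, so the expected completion time is 1 · p/(1−p) + 0.5 = 1.5 s; a tighter timeout would reduce the per-loss penalty, while one below the delay would never succeed, which is the trade-off the report optimizes.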

    Efficient wide-field radio interferometry response

    Radio interferometers do not measure the sky brightness distribution directly, but rather a modified Fourier transform of it. Imaging algorithms thus need a computational representation of the linear measurement operator and its adjoint, irrespective of the specific imaging algorithm chosen. In this paper, we present a C++ implementation of the radio-interferometric measurement operator for wide-field measurements which is based on "improved w-stacking". It can provide high accuracy (down to ≈ 10⁻¹²), is based on a new gridding kernel which allows smaller kernel support for a given accuracy, dynamically chooses kernel, kernel support, and oversampling factor for maximum performance, uses piecewise polynomial approximation for cheap evaluation of the gridding kernel, treats the visibilities in cache-friendly order, uses explicit vectorisation where available, and comes with a parallelisation scheme which also scales well in the adjoint direction (a problem for many previous implementations). The implementation has a small memory footprint in the sense that temporary internal data structures are much smaller than the respective input and output data, allowing in-memory processing of data sets which previously needed to be read from disk or distributed across several compute nodes.
    Comment: 13 pages, 8 figures
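    The operator/adjoint pairing the abstract emphasizes can be illustrated with a toy dense response (our own simplified stand-in; the paper's implementation uses gridding and w-stacking instead of an explicit matrix, and omits the w-term entirely here): visibilities are a non-uniform Fourier transform of the sky, and the adjoint must satisfy ⟨y, Rx⟩ = ⟨Rᴴy, x⟩, a property any implementation can be unit-tested against.

    ```python
    import numpy as np

    def nudft_matrix(uv, lm):
        """Toy dense interferometric response: R[k, j] = exp(-2*pi*i*(u_k l_j + v_k m_j)),
        mapping pixel fluxes at directions (l, m) to visibilities at baselines (u, v)."""
        u, v = uv[:, 0:1], uv[:, 1:2]          # shape (n_vis, 1) for broadcasting
        l, m = lm[:, 0], lm[:, 1]              # shape (n_pix,)
        return np.exp(-2j * np.pi * (u * l + v * m))

    rng = np.random.default_rng(0)
    uv = rng.normal(size=(50, 2))              # baseline coordinates in wavelengths
    lm = rng.uniform(-0.01, 0.01, (40, 2))     # pixel direction cosines
    R = nudft_matrix(uv, lm)

    x = rng.normal(size=40)                    # a random sky brightness vector
    y = rng.normal(size=50) + 1j * rng.normal(size=50)
    lhs = np.vdot(y, R @ x)                    # <y, R x>
    rhs = np.vdot(R.conj().T @ y, x)           # <R^H y, x>
    assert np.isclose(lhs, rhs)                # adjointness holds exactly here
    ```

    A dense matrix like this scales as O(n_vis · n_pix) and is only feasible for toy sizes; the point of the gridding approach in the paper is to approximate the same operator in roughly FFT time.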

    Stochastic models for dependable services

    In this paper we investigate the use of stochastic models for analysing service-oriented systems. We propose an iterative hybrid approach using system measurements, testbed observations, and formal models to derive a quantitative model of service-based systems that allows us to evaluate the effectiveness of the restart method in such systems. In cases where one is fortunate enough to have access to a real system for measurements, the data obtained often lacks statistical significance, or knowledge of the system is insufficient to explain the data. A testbed may then be preferable, as it allows for long experiment series and provides full control of the system's configuration. In order to provide meaningful data, the testbed must be equipped with fault-injection using a suitable fault-model and an appropriate load model. We fit phase-type distributions to the data obtained from the testbed in order to represent the observed data in a model that can be used, e.g., as a service process in a queueing model of our service-oriented system. The queueing model may be used to analyse different restart policies, buffer sizes, or service disciplines. Results from the model can be fed back into the testbed, providing it with better fault and load models and thus closing the modelling loop.
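    The fitting step in the loop above can be sketched with the simplest possible PH fit (a crude moment-matching stand-in of our own, not the fitting tools the paper uses): match an Erlang(k, λ), a special PH distribution, to the sample mean and squared coefficient of variation.

    ```python
    import random

    def fit_erlang(samples):
        """Moment-match an Erlang(k, lam) distribution to data:
        pick the number of phases k from the squared coefficient of
        variation (SCV of Erlang(k) is 1/k), then the rate lam from
        the mean (mean of Erlang(k, lam) is k/lam)."""
        n = len(samples)
        mean = sum(samples) / n
        var = sum((x - mean) ** 2 for x in samples) / n
        scv = var / mean ** 2
        k = max(1, round(1 / scv))
        lam = k / mean
        return k, lam
    ```

    Erlang fitting only covers SCV ≤ 1; data with higher variability would need, e.g., hyperexponential phases instead, which is why general PH-fitting tools exist.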

    Phase-type Distributions

    Abstract: Both analytical (Chapter ??) and simulation- and experimentation-based (Chapter ??) approaches to resilience assessment rely on models for the various phenomena that may affect the system under study. These models must be both accurate, in that they reflect the phenomenon well, and suitable for the chosen approach. Analytical methods require models that are analytically tractable, while methods for experimentation, such as fault-injection (see Chapter ??), require the efficient generation of random variates from the models. Phase-type (PH) distributions are a versatile tool for modelling a wide range of real-world phenomena. These distributions can capture many important aspects of measurement data while retaining analytical tractability and efficient random-variate generation. This chapter provides an introduction to the use of PH distributions in resilience assessment. The chapter starts with a discussion of the mathematical basics. We then describe tools for fitting PH distributions to measurement data, before illustrating the application of PH distributions in analysis and in random-variate generation.